DRY up batched KVStore reads utility methods#876

Open
tnull wants to merge 1 commit into lightningdevkit:main from tnull:2026-04-batching-store

Conversation

@tnull
Collaborator

@tnull tnull commented Apr 15, 2026

DRY up batched KVStore reads with `read_all_objects` helper

The parallel `JoinSet`-based batching loop was duplicated across
`read_payments` and `read_pending_payments`. Extract it into a generic
`read_all_objects<T: Readable>` helper that callers invoke directly
with the relevant namespace constants. Per-type log messages are
preserved via `std::any::type_name::<T>()`.

Co-Authored-By: HAL 9000
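
The shape of the helper described in the commit message can be sketched as follows. This is a hypothetical, heavily simplified synchronous stand-in: the real LDK Node helper is async, spawns reads on a tokio `JoinSet`, and takes the `DynStore` trait object plus namespace constants, none of which are reproduced here. `Readable` and `PaymentDetails` are mocked purely for illustration.

```rust
use std::any::type_name;
use std::collections::HashMap;

// Hypothetical simplified stand-in for the deserialization trait.
trait Readable: Sized {
    fn read(bytes: &[u8]) -> Result<Self, String>;
}

// Mock payload type; the real code reads e.g. `PaymentDetails` from the store.
#[derive(Debug, PartialEq)]
struct PaymentDetails(u64);

impl Readable for PaymentDetails {
    fn read(bytes: &[u8]) -> Result<Self, String> {
        let arr: [u8; 8] = bytes.try_into().map_err(|_| "short read".to_string())?;
        Ok(PaymentDetails(u64::from_le_bytes(arr)))
    }
}

// Read and deserialize every object of type `T` stored under `namespace`.
// The store is mocked as a (namespace, key) -> bytes map.
fn read_all_objects<T: Readable>(
    store: &HashMap<(String, String), Vec<u8>>, namespace: &str,
) -> Result<Vec<T>, String> {
    let mut out = Vec::new();
    for ((ns, key), bytes) in store {
        if ns == namespace {
            // Per-type log/error context is recovered via `std::any::type_name::<T>()`,
            // so the generic helper can still report which type failed to read.
            let obj = T::read(bytes).map_err(|e| {
                format!("Failed to read {} for key {}: {}", type_name::<T>(), key, e)
            })?;
            out.push(obj);
        }
    }
    Ok(out)
}

fn main() {
    let mut store = HashMap::new();
    store.insert(("payments".to_string(), "a".to_string()), 7u64.to_le_bytes().to_vec());
    let payments: Vec<PaymentDetails> = read_all_objects(&store, "payments").unwrap();
    assert_eq!(payments, vec![PaymentDetails(7)]);
    println!("read {} objects", payments.len());
}
```

The point of the refactor is visible even in this reduced form: `read_payments` and `read_pending_payments` collapse into two one-line callers that pass different namespace constants and type parameters.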

@tnull tnull requested a review from joostjager April 15, 2026 13:54
@ldk-reviews-bot

ldk-reviews-bot commented Apr 15, 2026

👋 Thanks for assigning @joostjager as a reviewer!
I'll wait for their review and will help manage the review process.
Once they submit their review, I'll check if a second reviewer would be helpful.

@tnull tnull self-assigned this Apr 16, 2026
@tnull tnull moved this to Goal: Merge in Weekly Goals Apr 16, 2026
@ldk-reviews-bot

🔔 1st Reminder

Hey @joostjager! This PR has been waiting for your review.
Please take a look when you have a chance. If you're unable to review, please let us know so we can find another reviewer.

@ldk-reviews-bot

🔔 2nd Reminder

Hey @joostjager! This PR has been waiting for your review.
Please take a look when you have a chance. If you're unable to review, please let us know so we can find another reviewer.

@tnull tnull added this to the 0.8 milestone Apr 20, 2026
@Camillarhi
Contributor

LGTM

@Camillarhi
Contributor

This now needs a rebase, there's a merge conflict.

@ldk-reviews-bot

🔔 3rd Reminder

Hey @joostjager! This PR has been waiting for your review.
Please take a look when you have a chance. If you're unable to review, please let us know so we can find another reviewer.

Contributor

@joostjager joostjager left a comment


I am not sure a generic client-side read throttle is the right abstraction here.

I think the motivation is mainly VSS-specific? For local stores such as SQLite, this mostly limits task fan-out rather than real backend concurrency. Perhaps it should be implemented only in the VSS client. And perhaps longer term, a multi-key read would be helpful in the VSS protocol?
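
The client-side read throttle being questioned here (the semaphore-based batching the PR originally proposed, per its original title) can be illustrated with a minimal sketch. This is a hypothetical stand-in: the real implementation would use `tokio::sync::Semaphore` inside an async store wrapper, whereas this version uses std threads and a hand-rolled Mutex/Condvar semaphore so it stays self-contained.

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
use std::time::Duration;

// Tiny counting semaphore; stable std has no semaphore type, and the real
// async code would reach for tokio::sync::Semaphore instead.
struct Semaphore {
    permits: Mutex<usize>,
    cv: Condvar,
}

impl Semaphore {
    fn new(n: usize) -> Self {
        Self { permits: Mutex::new(n), cv: Condvar::new() }
    }
    fn acquire(&self) {
        let mut p = self.permits.lock().unwrap();
        while *p == 0 {
            p = self.cv.wait(p).unwrap();
        }
        *p -= 1;
    }
    fn release(&self) {
        *self.permits.lock().unwrap() += 1;
        self.cv.notify_one();
    }
}

// Issue `num_keys` simulated store reads, at most `limit` in flight at once.
// Returns the maximum observed concurrency so the cap can be verified.
fn throttled_reads(num_keys: usize, limit: usize) -> usize {
    let sem = Arc::new(Semaphore::new(limit));
    let in_flight = Arc::new(Mutex::new((0usize, 0usize))); // (current, max seen)
    let handles: Vec<_> = (0..num_keys)
        .map(|_| {
            let sem = Arc::clone(&sem);
            let in_flight = Arc::clone(&in_flight);
            thread::spawn(move || {
                sem.acquire();
                {
                    let mut f = in_flight.lock().unwrap();
                    f.0 += 1;
                    f.1 = f.1.max(f.0);
                }
                thread::sleep(Duration::from_millis(5)); // simulated backend read
                in_flight.lock().unwrap().0 -= 1;
                sem.release();
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let max_seen = in_flight.lock().unwrap().1;
    max_seen
}

fn main() {
    let max_concurrent = throttled_reads(32, 4);
    assert!(max_concurrent <= 4);
    println!("max in-flight reads: {}", max_concurrent);
}
```

The sketch also makes the reviewer's objection concrete: for a local SQLite store the semaphore only caps how many tasks are spawned, while for a remote backend like VSS it caps actual concurrent network requests, which is where the throttle earns its keep.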

Comment thread src/builder.rs
Comment thread src/io/utils.rs Outdated
@tnull
Collaborator Author

tnull commented Apr 24, 2026

I am not sure a generic client-side read throttle is the right abstraction here.

Hmm, I tend to agree, though the pattern is pre-existing.

I think the motivation is mainly VSS-specific?

Well, yes, presumably Postgres could also benefit from it, though that store should probably switch to connection pooling anyway?

For local stores such as SQLite, this mostly limits task fan-out rather than real backend concurrency. Perhaps it should be implemented only in the VSS client. And perhaps longer term, a multi-key read would be helpful in the VSS protocol?

Yeah, maybe for now I should just switch to wrap VssStore in BatchingStore in this PR? WDYT?

Contributor

@joostjager joostjager left a comment


Yeah, maybe for now I should just switch to wrap VssStore in BatchingStore in this PR? WDYT?

I think this is a good move, and later expand either towards protocol-level multi reads for vss, or perhaps generalize for other backends if the need arises.

@tnull
Collaborator Author

tnull commented Apr 28, 2026

Discussed offline, decided we're not going with the BatchingStore abstraction right now, but only DRY up the read_ utility methods in io/utils.rs in this PR.

@tnull tnull changed the title DRY up batched KVStore reads with semaphore-based BatchingStore DRY up batched KVStore reads utility methods Apr 28, 2026
@tnull tnull force-pushed the 2026-04-batching-store branch from 5b8cebf to 22866ab on April 28, 2026 09:39
@tnull tnull requested a review from joostjager April 28, 2026 09:40
joostjager previously approved these changes Apr 28, 2026
Contributor

@joostjager joostjager left a comment


Looks good, uncontentious. Rebase needed.

Comment thread src/io/utils.rs
/// Read all objects of type `T` from the given namespace, spawning reads in parallel.
pub(crate) async fn read_all_objects<T, L>(
	kv_store: &DynStore, logger: L,
) -> Result<Vec<PaymentDetails>, std::io::Error>
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

I briefly considered making this an extension method, but I don't think it's a great idea.

@tnull
Collaborator Author

tnull commented Apr 28, 2026

Looks good, uncontentious. Rebase needed.

Rebased, sorry, forgot about that before.

@tnull tnull requested a review from joostjager April 28, 2026 11:19